Learning Bayesian Networks

Previous notebooks showed how Bayesian networks economically encode a probability distribution over a set of variables, and how they can be used e.g. to predict variable states, or to generate new samples from the joint distribution. This section will be about obtaining a Bayesian network, given a set of sample data. Learning a Bayesian network can be split into two problems:

Parameter learning: Given a set of data samples and a DAG that captures the dependencies between the variables, estimate the (conditional) probability distributions of the individual variables.

Structure learning: Given a set of data samples, estimate a DAG that captures the dependencies between the variables.

This notebook aims to illustrate how parameter learning and structure learning can be done with pgmpy. Currently, the library supports:

  • Parameter learning for discrete nodes:
    • Maximum Likelihood Estimation
    • Bayesian Estimation
  • Structure learning for discrete, fully observed networks:
    • Score-based structure estimation (BIC/BDeu/K2 score; exhaustive search, hill climb/tabu search)
    • Constraint-based structure estimation (PC)
    • Hybrid structure estimation (MMHC)

Parameter Learning

Suppose we have the following data:


In [1]:
import pandas as pd
data = pd.DataFrame(data={'fruit': ["banana", "apple", "banana", "apple", "banana","apple", "banana", 
                                    "apple", "apple", "apple", "banana", "banana", "apple", "banana",], 
                          'tasty': ["yes", "no", "yes", "yes", "yes", "yes", "yes", 
                                    "yes", "yes", "yes", "yes", "no", "no", "no"], 
                          'size': ["large", "large", "large", "small", "large", "large", "large",
                                    "small", "large", "large", "large", "large", "small", "small"]})
print(data)


     fruit tasty   size
0   banana   yes  large
1    apple    no  large
2   banana   yes  large
3    apple   yes  small
4   banana   yes  large
5    apple   yes  large
6   banana   yes  large
7    apple   yes  small
8    apple   yes  large
9    apple   yes  large
10  banana   yes  large
11  banana    no  large
12   apple    no  small
13  banana    no  small

We know that the variables relate as follows:


In [2]:
from pgmpy.models import BayesianModel

model = BayesianModel([('fruit', 'tasty'), ('size', 'tasty')])  # fruit -> tasty <- size

Parameter learning is the task to estimate the values of the conditional probability distributions (CPDs), for the variables fruit, size, and tasty.

State counts

To make sense of the given data, we can start by counting how often each state of a variable occurs. If the variable depends on parents, the counts are done conditionally on the parents' states, i.e. separately for each parent configuration:


In [3]:
from pgmpy.estimators import ParameterEstimator
pe = ParameterEstimator(model, data)
print("\n", pe.state_counts('fruit'))  # unconditional
print("\n", pe.state_counts('tasty'))  # conditional on fruit and size


         fruit
apple       7
banana      7

 fruit apple       banana      
size  large small  large small
tasty                         
no      1.0   1.0    1.0   1.0
yes     3.0   2.0    5.0   0.0

We can see, for example, that as many apples as bananas were observed and that 5 large bananas were tasty, while only 1 was not.

Maximum Likelihood Estimation

A natural estimate for the CPDs is to simply use the relative frequencies with which the variable states have occurred. We observed 7 apples among a total of 14 fruits, so we might guess that about 50% of fruits are apples.

This approach is Maximum Likelihood Estimation (MLE). According to MLE, we should fill the CPDs in such a way that $P(\text{data}|\text{model})$ is maximal. This is achieved by using the relative frequencies. See [1], section 17.1 for an introduction to ML parameter estimation. pgmpy supports MLE as follows:


In [4]:
from pgmpy.estimators import MaximumLikelihoodEstimator
mle = MaximumLikelihoodEstimator(model, data)
print(mle.estimate_cpd('fruit'))  # unconditional
print(mle.estimate_cpd('tasty'))  # conditional


+---------------+-----+
| fruit(apple)  | 0.5 |
+---------------+-----+
| fruit(banana) | 0.5 |
+---------------+-----+
+------------+--------------+--------------------+---------------------+---------------+
| fruit      | fruit(apple) | fruit(apple)       | fruit(banana)       | fruit(banana) |
+------------+--------------+--------------------+---------------------+---------------+
| size       | size(large)  | size(small)        | size(large)         | size(small)   |
+------------+--------------+--------------------+---------------------+---------------+
| tasty(no)  | 0.25         | 0.3333333333333333 | 0.16666666666666666 | 1.0           |
+------------+--------------+--------------------+---------------------+---------------+
| tasty(yes) | 0.75         | 0.6666666666666666 | 0.8333333333333334  | 0.0           |
+------------+--------------+--------------------+---------------------+---------------+

mle.estimate_cpd(variable) computes the state counts and divides each cell by the (conditional) sample size. The mle.get_parameters()-method returns a list of CPDs for all variables of the model.
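To see that this really is just counting and normalizing, here is a small sanity-check sketch (plain pandas, not pgmpy) that reproduces the conditional relative frequencies of the tasty CPD from the data frame above:

import pandas as pd

# state counts of `tasty`, conditional on each (fruit, size) configuration
counts = pd.crosstab(index=data['tasty'], columns=[data['fruit'], data['size']])
# divide each column by its conditional sample size -> the MLE of the CPD
print(counts / counts.sum(axis=0))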

The built-in fit()-method of BayesianModel provides more convenient access to parameter estimators:


In [5]:
# Calibrate all CPDs of `model` using MLE:
model.fit(data, estimator=MaximumLikelihoodEstimator)

While very straightforward, the ML estimator has the problem of overfitting to the data. In the above CPD, the probability of a large banana being tasty is estimated at 0.833, because 5 out of 6 observed large bananas were tasty. Fine. But the probability of a small banana being tasty is estimated at 0.0, because we observed only one small banana and it happened to be not tasty. That should hardly make us certain that small bananas aren't tasty! We simply do not have enough observations to rely on the observed frequencies. If the observed data is not representative of the underlying distribution, ML estimates can be extremely far off.

When estimating parameters for Bayesian networks, lack of data is a frequent problem. Even if the total sample size is very large, the fact that state counts are done conditionally for each parent configuration causes immense fragmentation. If a variable has 3 parents that can each take 10 states, then state counts will be done separately for 10^3 = 1000 parent configurations. This makes MLE very fragile and unstable for learning Bayesian network parameters. A way to mitigate MLE's overfitting is Bayesian Parameter Estimation.

Bayesian Parameter Estimation

The Bayesian Parameter Estimator starts with already existing prior CPDs that express our beliefs about the variables before the data was observed. Those "priors" are then updated using the state counts from the observed data. See [1], Section 17.3 for a general introduction to Bayesian estimators.

One can think of the priors as consisting of pseudo state counts that are added to the actual counts before normalization. Unless one wants to encode specific beliefs about the distributions of the variables, one commonly chooses uniform priors, i.e. ones that deem all states equiprobable.

A very simple prior is the so-called K2 prior, which simply adds 1 to the count of every single state. A somewhat more sensible choice of prior is BDeu (Bayesian Dirichlet equivalent uniform prior). For BDeu we need to specify an equivalent sample size N; the pseudo-counts then correspond to having observed N additional uniform samples, spread evenly over the states of the variable and its parent configurations. In pgmpy:


In [6]:
from pgmpy.estimators import BayesianEstimator
est = BayesianEstimator(model, data)

print(est.estimate_cpd('tasty', prior_type='BDeu', equivalent_sample_size=10))


+------------+---------------------+--------------------+--------------------+---------------------+
| fruit      | fruit(apple)        | fruit(apple)       | fruit(banana)      | fruit(banana)       |
+------------+---------------------+--------------------+--------------------+---------------------+
| size       | size(large)         | size(small)        | size(large)        | size(small)         |
+------------+---------------------+--------------------+--------------------+---------------------+
| tasty(no)  | 0.34615384615384615 | 0.4090909090909091 | 0.2647058823529412 | 0.6428571428571429  |
+------------+---------------------+--------------------+--------------------+---------------------+
| tasty(yes) | 0.6538461538461539  | 0.5909090909090909 | 0.7352941176470589 | 0.35714285714285715 |
+------------+---------------------+--------------------+--------------------+---------------------+

The estimated values in the CPDs are now more conservative. In particular, the estimate for a small banana being not tasty is now around 0.64 rather than 1.0. Setting equivalent_sample_size to 10 means that 10 uniform pseudo-observations are spread over the 2 states and 4 parent configurations of tasty, i.e. 10/(2*4) = 1.25 pseudo-counts per cell (here: +1.25 small bananas that are tasty and +1.25 that aren't).
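To make this concrete, here is a small sanity-check sketch (plain pandas, not pgmpy's implementation) that adds 1.25 pseudo-counts to every cell and re-normalizes; under that assumption it reproduces the BDeu CPD above:

import pandas as pd

pseudo_count = 10 / (2 * 4)  # equivalent_sample_size / (states of tasty * parent configurations)
counts = pd.crosstab(index=data['tasty'], columns=[data['fruit'], data['size']])
smoothed = counts + pseudo_count
print(smoothed / smoothed.sum(axis=0))  # matches the BDeu estimate for `tasty` above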

BayesianEstimator, too, can be used via the fit()-method. Full example:


In [7]:
import numpy as np
import pandas as pd
from pgmpy.models import BayesianModel
from pgmpy.estimators import BayesianEstimator

# generate data
data = pd.DataFrame(np.random.randint(low=0, high=2, size=(5000, 4)), columns=['A', 'B', 'C', 'D'])
model = BayesianModel([('A', 'B'), ('A', 'C'), ('D', 'C'), ('B', 'D')])

model.fit(data, estimator=BayesianEstimator, prior_type="BDeu") # default equivalent_sample_size=5
for cpd in model.get_cpds():
    print(cpd)


+------+----------+
| A(0) | 0.503996 |
+------+----------+
| A(1) | 0.496004 |
+------+----------+
+------+-------------------+---------------------+
| A    | A(0)              | A(1)                |
+------+-------------------+---------------------+
| B(0) | 0.499207135777998 | 0.49838872104733134 |
+------+-------------------+---------------------+
| B(1) | 0.500792864222002 | 0.5016112789526687  |
+------+-------------------+---------------------+
+------+---------------------+--------------------+--------------------+-------------------+
| A    | A(0)                | A(0)               | A(1)               | A(1)              |
+------+---------------------+--------------------+--------------------+-------------------+
| D    | D(0)                | D(1)               | D(0)               | D(1)              |
+------+---------------------+--------------------+--------------------+-------------------+
| C(0) | 0.5066810768323836  | 0.4908018396320736 | 0.4844929606202816 | 0.511135414595347 |
+------+---------------------+--------------------+--------------------+-------------------+
| C(1) | 0.49331892316761644 | 0.5091981603679264 | 0.5155070393797184 | 0.488864585404653 |
+------+---------------------+--------------------+--------------------+-------------------+
+------+---------------------+---------------------+
| B    | B(0)                | B(1)                |
+------+---------------------+---------------------+
| D(0) | 0.49959943921490085 | 0.49840542156667333 |
+------+---------------------+---------------------+
| D(1) | 0.5004005607850991  | 0.5015945784333267  |
+------+---------------------+---------------------+

Structure Learning

To learn model structure (a DAG) from a data set, there are two broad techniques:

  • score-based structure learning
  • constraint-based structure learning

The combination of both techniques allows further improvement:

  • hybrid structure learning

We briefly discuss all approaches and give examples.

Score-based Structure Learning

This approach construes model selection as an optimization task. It has two building blocks:

  • A scoring function $s_D\colon M \to \mathbb R$ that maps models to a numerical score, based on how well they fit to a given data set $D$.
  • A search strategy to traverse the search space of possible models $M$ and select a model with optimal score.

Scoring functions

Commonly used scores to measure the fit between model and data are Bayesian Dirichlet scores such as BDeu or K2 and the Bayesian Information Criterion (BIC, also called MDL). See [1], Section 18.3 for a detailed introduction on scores. As before, BDeu is dependent on an equivalent sample size.
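For reference, the BIC score has the familiar penalized log-likelihood form (the exact convention may differ slightly between implementations): $\text{BIC}(M; D) = \log P(D \mid \hat\theta_M, M) - \frac{\log N}{2}\,\text{Dim}(M)$, where $\hat\theta_M$ are the maximum-likelihood parameters, $N$ is the number of samples, and $\text{Dim}(M)$ is the number of independent parameters of $M$. Higher scores indicate a better trade-off between fit and model complexity.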


In [8]:
import pandas as pd
import numpy as np
from pgmpy.estimators import BDeuScore, K2Score, BicScore
from pgmpy.models import BayesianModel

# create random data sample with 3 variables, where Z is dependent on X, Y:
data = pd.DataFrame(np.random.randint(0, 4, size=(5000, 2)), columns=list('XY'))
data['Z'] = data['X'] + data['Y']

bdeu = BDeuScore(data, equivalent_sample_size=5)
k2 = K2Score(data)
bic = BicScore(data)

model1 = BayesianModel([('X', 'Z'), ('Y', 'Z')])  # X -> Z <- Y
model2 = BayesianModel([('X', 'Z'), ('X', 'Y')])  # Y <- X -> Z


print(bdeu.score(model1))
print(k2.score(model1))
print(bic.score(model1))

print(bdeu.score(model2))
print(k2.score(model2))
print(bic.score(model2))


-13937.875339249586
-14328.68417117677
-14293.914012166675
-20904.92154575061
-20931.743864925047
-20948.966605351194

While the scores vary slightly, we can see that the correct model1 has a much higher score than model2. Importantly, these scores decompose, i.e. they can be computed locally for each of the variables given their potential parents, independently of other parts of the network:


In [9]:
print(bdeu.local_score('Z', parents=[]))
print(bdeu.local_score('Z', parents=['X']))
print(bdeu.local_score('Z', parents=['X', 'Y']))


-9191.675817737154
-6993.847644065196
-57.120187742958706

Search strategies

The search space of DAGs is super-exponential in the number of variables, and the above scoring functions allow for local maxima. The first property makes exhaustive search intractable for all but very small networks; the second prevents efficient local optimization algorithms from always finding the optimal structure. Thus, identifying the ideal structure is often not tractable. Despite this, heuristic search strategies often yield good results.
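To get a feeling for how fast the space grows, here is a small illustrative sketch (plain Python, not part of pgmpy) that counts the labeled DAGs on n nodes using Robinson's recurrence:

from math import comb

def num_dags(n):
    # number of labeled DAGs on n nodes (Robinson's recurrence)
    if n == 0:
        return 1
    return sum((-1) ** (k + 1) * comb(n, k) * 2 ** (k * (n - k)) * num_dags(n - k)
               for k in range(1, n + 1))

print([num_dags(n) for n in range(1, 8)])
# [1, 3, 25, 543, 29281, 3781503, 1138779265]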

If only a few nodes are involved (read: fewer than 5), ExhaustiveSearch can be used to compute the score for every DAG and return the best-scoring one:


In [10]:
from pgmpy.estimators import ExhaustiveSearch

es = ExhaustiveSearch(data, scoring_method=bic)
best_model = es.estimate()
print(best_model.edges())

print("\nAll DAGs by score:")
for score, dag in reversed(es.all_scores()):
    print(score, dag.edges())


[('X', 'Z'), ('Y', 'Z')]

All DAGs by score:
-14293.914012166677 [('X', 'Z'), ('Y', 'Z')]
-14328.333029363766 [('X', 'Y'), ('Z', 'X'), ('Z', 'Y')]
-14328.333029363768 [('Y', 'X'), ('Z', 'X'), ('Z', 'Y')]
-14328.33302936377 [('Y', 'Z'), ('Y', 'X'), ('Z', 'X')]
-14328.33302936377 [('X', 'Z'), ('Y', 'Z'), ('Y', 'X')]
-14328.33302936377 [('X', 'Y'), ('X', 'Z'), ('Z', 'Y')]
-14328.33302936377 [('X', 'Y'), ('X', 'Z'), ('Y', 'Z')]
-16494.429647593526 [('X', 'Y'), ('Z', 'Y')]
-16497.214254274455 [('Y', 'X'), ('Z', 'X')]
-18745.666363243407 [('Z', 'X'), ('Z', 'Y')]
-18745.66636324341 [('Y', 'Z'), ('Z', 'X')]
-18745.666363243414 [('X', 'Z'), ('Z', 'Y')]
-20911.76298147317 [('Z', 'Y')]
-20911.76298147317 [('Y', 'Z')]
-20914.5475881541 [('Z', 'X')]
-20914.5475881541 [('X', 'Z')]
-20946.18199867026 [('Y', 'X'), ('Z', 'Y')]
-20946.181998670265 [('Y', 'Z'), ('Y', 'X')]
-20946.181998670265 [('X', 'Y'), ('Y', 'Z')]
-20948.96660535119 [('X', 'Y'), ('Z', 'X')]
-20948.966605351194 [('X', 'Z'), ('Y', 'X')]
-20948.966605351194 [('X', 'Y'), ('X', 'Z')]
-23080.64420638386 []
-23115.063223580953 [('Y', 'X')]
-23115.063223580953 [('X', 'Y')]

Once more nodes are involved, one needs to switch to heuristic search. HillClimbSearch implements a greedy local search that starts from the DAG start (default: disconnected DAG) and proceeds by iteratively performing single-edge manipulations that maximally increase the score. The search terminates once a local maximum is found.


In [11]:
from pgmpy.estimators import HillClimbSearch

# create some data with dependencies
data = pd.DataFrame(np.random.randint(0, 3, size=(2500, 8)), columns=list('ABCDEFGH'))
data['A'] += data['B'] + data['C']
data['H'] = data['G'] - data['A']

hc = HillClimbSearch(data, scoring_method=BicScore(data))
best_model = hc.estimate()
print(best_model.edges())


[('A', 'B'), ('A', 'C'), ('B', 'C'), ('G', 'A'), ('G', 'H'), ('H', 'A')]

The search correctly identifies, for example, that B and C do not influence H directly, only through A, and of course that D, E, and F are independent.

To enforce a wider exploration of the search space, the search can be enhanced with a tabu list. The list keeps track of the last n modifications; those are then not allowed to be reversed, regardless of the score. Additionally, a white_list or black_list can be supplied to restrict the search to a particular subset of edges or to exclude certain edges. The parameter max_indegree restricts the maximum number of parents for each node, as sketched below.
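A minimal sketch of how these options could be combined (the parameter names are taken from the description above; exact keyword arguments may vary between pgmpy versions, so treat this as illustrative):

# continue with the `hc` searcher from above:
restricted_model = hc.estimate(tabu_length=10,  # do not undo the last 10 modifications
                               max_indegree=2,  # at most 2 parents per node
                               black_list=[('D', 'A'), ('E', 'A')])  # forbid these edges
print(restricted_model.edges())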

Constraint-based Structure Learning

A different, but quite straightforward approach to build a DAG from data is this:

  1. Identify independencies in the data set using hypothesis tests
  2. Construct DAG (pattern) according to identified independencies

(Conditional) Independence Tests

Independencies in the data can be identified using chi2 conditional independence tests. To this end, constraint-based estimators in pgmpy have a test_conditional_independence(X, Y, Zs)-method that performs a hypothesis test on the data sample. It allows checking whether X is independent of Y given a set of variables Zs:


In [12]:
from pgmpy.estimators import ConstraintBasedEstimator

data = pd.DataFrame(np.random.randint(0, 3, size=(2500, 8)), columns=list('ABCDEFGH'))
data['A'] += data['B'] + data['C']
data['H'] = data['G'] - data['A']
data['E'] *= data['F']

est = ConstraintBasedEstimator(data)

print(est.test_conditional_independence('B', 'H'))          # dependent
print(est.test_conditional_independence('B', 'E'))          # independent
print(est.test_conditional_independence('B', 'H', ['A']))   # independent
print(est.test_conditional_independence('A', 'G'))          # independent
print(est.test_conditional_independence('A', 'G',  ['H']))  # dependent


False
True
True
True
False

test_conditional_independence() performs a chi2 hypothesis test and returns an independence judgement: True if, at the chosen significance level, X is deemed independent of Y given Zs, and False otherwise (as in the output above). Internally, the test computes a chi2 statistic and its p-value, i.e. the probability of observing the computed chi2 statistic (or an even more extreme value) under the null hypothesis that X and Y are independent given Zs. It also heuristically checks whether the sample size is sufficient for the test (see the warnings in the MMHC example further below).

This can be used to make independence judgements at a given level of significance:


In [13]:
def is_independent(X, Y, Zs=[], significance_level=0.05):
    # thin wrapper; the estimator's test already returns a boolean
    # independence judgement (True = independent), see the output above
    return est.test_conditional_independence(X, Y, Zs)

print(is_independent('B', 'H'))
print(is_independent('B', 'E'))
print(is_independent('B', 'H', ['A']))
print(is_independent('A', 'G'))
print(is_independent('A', 'G', ['H']))


False
True
True
True
False

DAG (pattern) construction

With a method for independence testing at hand, we can construct a DAG from the data set in three steps:

  1. Construct an undirected skeleton - estimate_skeleton()
  2. Orient compelled edges to obtain a partially directed acyclic graph (PDAG; I-equivalence class of DAGs) - skeleton_to_pdag()
  3. Extend DAG pattern to a DAG by conservatively orienting the remaining edges in some way - pdag_to_dag()

Steps 1 & 2 form the so-called PC algorithm, see [2], page 550. PDAGs are DirectedGraphs that may contain edges in both directions between two nodes, indicating that the orientation of the edge is not determined.


In [14]:
skel, seperating_sets = est.estimate_skeleton(significance_level=0.01)
print("Undirected edges: ", skel.edges())

pdag = est.skeleton_to_pdag(skel, seperating_sets)
print("PDAG edges:       ", pdag.edges())

model = est.pdag_to_dag(pdag)
print("DAG edges:        ", model.edges())


Undirected edges:  [('A', 'B'), ('A', 'C'), ('A', 'H'), ('E', 'F'), ('G', 'H')]
PDAG edges:        [('A', 'H'), ('B', 'A'), ('C', 'A'), ('E', 'F'), ('F', 'E'), ('G', 'H')]
DAG edges:         [('A', 'H'), ('B', 'A'), ('C', 'A'), ('F', 'E'), ('G', 'H')]

The estimate()-method provides a shorthand for the three steps above and directly returns a BayesianModel:


In [15]:
print(est.estimate(significance_level=0.01).edges())


[('A', 'H'), ('B', 'A'), ('C', 'A'), ('F', 'E'), ('G', 'H')]

The estimate_from_independencies()-method can be used to construct a BayesianModel from a provided set of independencies (see class documentation for further features & methods):


In [16]:
from pgmpy.independencies import Independencies

ind = Independencies(['B', 'C'],
                     ['A', ['B', 'C'], 'D'])
ind = ind.closure()  # required (!) for faithfulness

model = ConstraintBasedEstimator.estimate_from_independencies("ABCD", ind)

print(model.edges())


[('A', 'D'), ('B', 'D'), ('C', 'D')]

PC PDAG construction is only guaranteed to work under the assumption that the identified set of independencies is faithful, i.e. there exists a DAG that exactly corresponds to it. Spurious dependencies in the data set can cause the reported independencies to violate faithfulness. It can happen that the estimated PDAG does not have any faithful completions (i.e. edge orientations that do not introduce new v-structures). In that case a warning is issued.

Hybrid Structure Learning

The MMHC algorithm [3] combines the constraint-based and score-based method. It has two parts:

  1. Learn undirected graph skeleton using the constraint-based construction procedure MMPC
  2. Orient edges using score-based optimization (BDeu score + modified hill-climbing)

We can perform the two steps separately, more or less as follows:


In [17]:
from pgmpy.estimators import MmhcEstimator
from pgmpy.estimators import BDeuScore

data = pd.DataFrame(np.random.randint(0, 3, size=(2500, 8)), columns=list('ABCDEFGH'))
data['A'] += data['B'] + data['C']
data['H'] = data['G'] - data['A']
data['E'] *= data['F']

mmhc = MmhcEstimator(data)
skeleton = mmhc.mmpc()
print("Part 1) Skeleton: ", skeleton.edges())

# use hill climb search to orient the edges:
hc = HillClimbSearch(data, scoring_method=BDeuScore(data))
model = hc.estimate(tabu_length=10, white_list=skeleton.to_directed().edges())
print("Part 2) Model:    ", model.edges())


/home/ankur/pgmpy_notebook/notebooks/pgmpy/estimators/CITests.py:95: UserWarning: Insufficient data for testing A _|_ G | ['B', 'C', 'H']. At least 4860 samples recommended, 2500 present.
  5 * num_params, len(data)
/home/ankur/pgmpy_notebook/notebooks/pgmpy/estimators/CITests.py:95: UserWarning: Insufficient data for testing A _|_ F | ['B', 'C', 'H']. At least 4860 samples recommended, 2500 present.
  5 * num_params, len(data)
[... many further "Insufficient data for testing ..." UserWarnings of the same form omitted ...]
Part 1) Skeleton:  [('A', 'C'), ('A', 'H'), ('B', 'C'), ('E', 'F'), ('G', 'H')]
Part 2) Model:     [('A', 'H'), ('C', 'A'), ('F', 'E'), ('G', 'H')]

MmhcEstimator.estimate() is a shorthand for both steps and directly estimates a BayesianModel.
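For completeness, the one-call variant would look roughly like this (a sketch; it bundles the MMPC skeleton search and the score-based edge orientation shown above):

model = MmhcEstimator(data).estimate()
print(model.edges())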

Conclusion

This notebook aimed to give an overview of pgmpy's estimators for learning Bayesian network structure and parameters. For more information about the individual functions, see their docstring documentation. If you used pgmpy's structure learning features to satisfactorily learn a non-trivial network from real data, feel free to drop us an email via the mailing list or just open a GitHub issue. We'd like to put your network in the examples section!

References

[1] Koller & Friedman, Probabilistic Graphical Models - Principles and Techniques, 2009

[2] Neapolitan, Learning Bayesian Networks, 2003

[3] Tsamardinos et al., The max-min hill-climbing BN structure learning algorithm, 2005